Artificial intelligence aims to teach machines to act like humans. Toward intelligent teaching, the machine learning community has begun to explore a promising topic named machine teaching, where a teacher designs the optimal (usually minimal) teaching set given a target model and a specific learner. However, previous works usually require numerous teaching examples and many iterations to guide learners to convergence, which is costly. In this paper, we consider a more intelligent teaching paradigm named one-shot machine teaching, which uses fewer examples to reach convergence faster. Different from typical teaching, this advanced paradigm establishes a tractable mapping from the teaching set to the model parameter. Theoretically, we prove that this mapping is surjective, which provides an existence guarantee for the optimal teaching set. Then, relying on the surjective mapping from the teaching set to the parameter, we develop a design strategy for the optimal teaching set under appropriate settings, under which two popular efficiency metrics, the teaching dimension and the iterative teaching dimension, both equal one. Extensive experiments verify the efficiency of our strategy and further demonstrate the intelligence of this new teaching paradigm.
Heterogeneous graphs, which contain multiple types of nodes and edges, are prevalent in various domains, including bibliographic networks, social media, and knowledge graphs. As a fundamental task in analyzing heterogeneous graphs, relevance measure aims to compute the relevance between two objects of different types, and has been used in many applications such as web search, recommendation, and community detection. Most existing relevance measures focus on homogeneous networks where objects are of the same type; a few measures have been developed for heterogeneous graphs, but they often require pre-defined meta-paths. Defining meaningful meta-paths requires substantial domain knowledge, which largely limits their applications, especially on schema-rich heterogeneous graphs such as knowledge graphs. Recently, graph neural networks (GNNs) have been widely applied to many graph mining tasks, but they have not yet been applied to measuring relevance. To address the above problems, we propose a novel GNN-based relevance measure, namely GSim. Specifically, we first theoretically analyze and show that GNNs are effective in measuring the relevance of nodes in a graph. We then propose a context-path-based graph neural network (CP-GNN) to automatically leverage the semantics in heterogeneous graphs. Moreover, we exploit CP-GNN to support relevance measures between two objects of any type. Extensive experiments demonstrate that GSim outperforms existing measures.
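The core idea of measuring relevance with GNN embeddings can be sketched as follows (a minimal illustration using generic mean-aggregation message passing, not the CP-GNN architecture itself; the function names and the cosine-similarity readout are our assumptions):

```python
import numpy as np

def gnn_embed(adj, feats, W, hops=2):
    # Generic mean-aggregation message passing over an adjacency matrix.
    # adj: (n, n) adjacency, feats: (n, d) node features, W: (d, d) weights.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h = feats
    for _ in range(hops):
        h = np.tanh(((adj @ h) / deg) @ W)  # aggregate neighbors, transform
    return h

def relevance(h, u, v):
    # Relevance of two nodes as cosine similarity of their GNN embeddings.
    return h[u] @ h[v] / (np.linalg.norm(h[u]) * np.linalg.norm(h[v]) + 1e-12)
```

Here relevance is simply the cosine similarity of propagated embeddings; CP-GNN replaces this generic layer with context-path-based aggregation over typed nodes and edges so that no hand-crafted meta-path is required.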
Learning on big data brings success for artificial intelligence (AI), but the annotation and training costs are expensive. In the future, learning on small data is one of the ultimate purposes of AI, which requires machines to recognize objectives and scenarios relying on small data, as humans do. A series of machine learning models follow this way, such as active learning, few-shot learning, and deep clustering. However, there are few theoretical guarantees on their generalization performance. Moreover, most of their settings are passive, that is, the label distribution is explicitly controlled by one specified sampling scenario. This survey follows agnostic active sampling under the PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in supervised and unsupervised fashion. With these theoretical analyses, we categorize small data learning models from two geometric perspectives: the Euclidean and non-Euclidean (hyperbolic) mean representations, where their optimization solutions are also presented and discussed. Afterwards, some potential learning scenarios that may benefit from learning on small data are summarized and analyzed. Finally, some challenging applications, e.g., computer vision and natural language processing, that may benefit from learning on small data are also surveyed.
Active learning maximizes hypothesis updates to find those desired unlabeled data. An inherent assumption is that this learning manner can derive those updates into the optimal hypothesis. However, its convergence may not be guaranteed well if those incremental updates are negative and disordered. In this paper, we introduce a machine teacher who provides a black-box teaching hypothesis for an active learner, where the teaching hypothesis is an effective approximation of the optimal hypothesis. Theoretically, we prove that, under the guidance of this teaching hypothesis, the learner can converge to tighter generalization error and label complexity bounds than those non-educated learners who do not receive any guidance from a teacher. We further consider two teaching scenarios: teaching a white-box and a black-box learner, where self-improvement of teaching is first proposed to improve the teaching performance. Experiments verify this idea and show better performance than fundamental active learning strategies, such as IWAL, IWAL-D, etc.
Deep learning on large-scale data is dominant nowadays. The unprecedented scale of data is arguably one of the most important driving forces behind its success. However, there still exist scenarios where collecting data or labels can be extremely expensive, e.g., medical imaging and robotics. To fill this gap, this paper considers the problem of learning from scratch with a small amount of representative data. First, we characterize this problem with active learning on homeomorphic tubes of spherical manifolds. This naturally yields a feasible hypothesis class. Using homologous topological properties, we identify an important connection: finding tube manifolds is equivalent to minimizing hyperspherical energy (MHE) in physical geometry. Inspired by this connection, we propose an MHE-based active learning (MHEAL) algorithm and provide comprehensive theoretical guarantees for MHEAL, covering convergence and generalization analysis. Finally, we demonstrate the empirical performance of MHEAL in a wide range of applications for data-efficient learning, including deep clustering, distribution matching, version space sampling, and deep active learning.
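The link between representative sampling and minimizing hyperspherical energy can be illustrated with a toy greedy selector (a sketch under simplifying assumptions, not the authors' MHEAL algorithm; the function names and the Riesz-energy formulation are ours):

```python
import numpy as np

def hyperspherical_energy(X, s=1):
    # Riesz s-energy of a point set on the unit sphere: sum over pairs of 1/||xi - xj||^s.
    # Lower energy means the points are more spread out (more representative).
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return np.sum(1.0 / (d[iu] ** s + 1e-12))

def greedy_mhe_selection(feats, k):
    # Project features onto the unit sphere, then greedily grow a subset
    # whose hyperspherical energy stays as low as possible.
    X = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    selected = [0]  # seed with an arbitrary point
    while len(selected) < k:
        rest = [i for i in range(len(X)) if i not in selected]
        best = min(rest, key=lambda i: hyperspherical_energy(X[selected + [i]]))
        selected.append(best)
    return selected
```

Because the pairwise energy blows up when points crowd together, each greedy step picks the sample farthest, in aggregate, from those already chosen, yielding a well-spread subset in the spirit of MHE-driven sampling.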
Different from large-scale platforms such as Taobao and Amazon, developing CVR models in small-scale recommendation scenarios is more challenging due to the severe Data Distribution Fluctuation (DDF) issue. DDF prevents existing CVR models from being effective since 1) several months of data are required to train a CVR model sufficiently in small scenarios, leading to considerable distribution discrepancy between training and online serving; and 2) e-commerce promotions have a much greater impact on small scenarios, leading to distribution uncertainty in the upcoming time period. In this work, we propose a novel CVR method named MetaCVR from the perspective of meta learning to address the DDF issue. First, a base CVR model consisting of a Feature Representation Network (FRN) and output layers is elaborately designed and trained sufficiently with samples across months. Then we regard time periods with different data distributions as different occasions, and obtain positive and negative prototypes for each occasion using the corresponding samples and the pre-trained FRN. Subsequently, a Distance Metric Network (DMN) is devised to calculate the distance metrics between each sample and all prototypes, which helps mitigate the distribution uncertainty. At last, we develop an Ensemble Prediction Network (EPN) that incorporates the outputs of the FRN and DMN to make the final CVR prediction. In this stage, we freeze the FRN and train the DMN and EPN with samples from the most recent time period, thus effectively easing the distribution discrepancy. To the best of our knowledge, this is the first study of CVR prediction targeting the DDF issue in small-scale recommendation scenarios. Experimental results on real-world datasets validate the superiority of our MetaCVR, and online A/B tests also show that our model achieves impressive gains of 11.92% on PCVR and 8.64% on GMV.
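The prototype and distance-metric stages can be sketched in a few lines (a simplified illustration: MetaCVR's DMN is a trained network, whereas here we use plain Euclidean distances, and all names are our assumptions):

```python
import numpy as np

def occasion_prototypes(embeddings, labels):
    # Positive/negative prototypes of one occasion: mean FRN embeddings of
    # converted (label 1) and non-converted (label 0) samples.
    pos = embeddings[labels == 1].mean(axis=0)
    neg = embeddings[labels == 0].mean(axis=0)
    return pos, neg

def distance_features(sample_emb, prototypes):
    # Distance from one sample to every occasion prototype; these distances
    # would feed a metric network to soften distribution uncertainty.
    return np.array([np.linalg.norm(sample_emb - p) for p in prototypes])
```

Each occasion contributes one positive and one negative prototype, so a sample's distance vector encodes which historical distribution it resembles most.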
Promotions are becoming more important and prevalent in e-commerce platforms to attract customers and boost sales. However, Click-Through Rate (CTR) prediction methods in recommender systems cannot handle such circumstances well since: 1) they cannot generalize to serving because the online data distribution is uncertain due to potentially upcoming promotions; and 2) without paying enough attention to scenario signals, they are incapable of learning the different feature representation patterns that coexist in each scenario. In this work, we propose Scenario-Adaptive Mixture of Experts (SAME), a simple yet effective model that serves both promotion and normal scenarios. Technically, it follows the idea of mixture of experts by adopting multiple experts to learn feature representations, which are modulated via an attention mechanism by a Feature Gated Network (FGN). To obtain high-quality representations, we design a Stacked Parallel Attention Unit (SPAU) to help each expert better handle user behavior sequences. To tackle the distribution uncertainty, a set of scenario signals are elaborately devised from the perspective of time series prediction and fed into the FGN, whose output is concatenated with the feature representation from each expert to learn the attention. Accordingly, the mixture of feature representations is scenario-adaptive and used for the final CTR prediction. In this way, each expert can learn a discriminative representation pattern. To the best of our knowledge, this is the first study of promotion-aware CTR prediction. Experimental results on real-world datasets validate the superiority of SAME. Online A/B tests also show that SAME achieves significant gains on CTR and of 5.94% on IPV during promotion periods, as well as 3.93% and 6.57%, respectively, in normal days.
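The scenario-adaptive gating can be sketched as follows (a minimal single-sample illustration with a linear gate; the actual FGN/SPAU architecture is more elaborate, and all names and shapes here are our assumptions):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def scenario_adaptive_moe(expert_outputs, scenario_signal, W_gate):
    # expert_outputs: (n_experts, d) feature representations from the experts.
    # scenario_signal: (s,) signals (e.g., time-series statistics of traffic).
    # W_gate: (d + s,) linear gate weights standing in for the gating network.
    # Attention logit per expert: its output concatenated with the scenario signal.
    logits = np.array([
        np.concatenate([e, scenario_signal]) @ W_gate for e in expert_outputs
    ])
    weights = softmax(logits)
    return weights @ expert_outputs  # scenario-adaptive mixture of representations
```

Because the gate sees the scenario signal alongside each expert's output, the same set of experts can be reweighted differently in promotion and normal periods.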
With the development of convolutional neural networks, hundreds of deep-learning-based dehazing methods have been proposed. In this paper, we provide a comprehensive survey on supervised, semi-supervised, and unsupervised single image dehazing. We first discuss the physical model, datasets, network modules, loss functions, and evaluation metrics that are commonly used. Then, the main contributions of various dehazing algorithms are categorized and summarized. Further, quantitative and qualitative experiments on various baseline methods are carried out. Finally, the unsolved issues and challenges that can inspire future research are pointed out. A collection of useful dehazing materials is available at \url{https://github.com/Xiaofeng-life/AwesomeDehazing}.
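The physical model commonly used in this literature is the atmospheric scattering model, I(x) = J(x)t(x) + A(1 - t(x)), where I is the hazy image, J the scene radiance, A the global atmospheric light, and t the transmission map. Given estimates of A and t, dehazing inverts it (a standard sketch; the clamping threshold t0 is a common heuristic, not a detail from this survey):

```python
import numpy as np

def dehaze(I, A, t, t0=0.1):
    # Invert the atmospheric scattering model I = J * t + A * (1 - t)
    # to recover the scene radiance J.
    t = np.maximum(t, t0)  # clamp transmission so noise is not amplified
    return (I - A) / t + A
```

In practice the hard part is estimating A and t from a single image, which is exactly what the surveyed networks learn to do.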
Decompilation aims to transform a low-level programming language (LPL) (e.g., a binary file) into its functionally-equivalent high-level programming language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
Nearest-Neighbor (NN) classification has been proven to be a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor-based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support datum based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform, or is at least comparable to, SOTA methods that need additional learning steps.
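The calibration step can be sketched as follows (a simplified illustration of similarity-weighted calibration toward base prototypes; the hyperparameters and exact weighting scheme are our assumptions, not necessarily P3DC-Shot's):

```python
import numpy as np

def calibrate(support, base_prototypes, alpha=0.5, tau=10.0):
    # support: (d,) feature of one support sample.
    # base_prototypes: (k, d) mean features of the base classes (the priors).
    # Cosine similarity of the support sample to each base prototype.
    sims = base_prototypes @ support / (
        np.linalg.norm(base_prototypes, axis=1) * np.linalg.norm(support))
    w = np.exp(tau * sims)
    w /= w.sum()
    # Shift the support feature toward the similarity-weighted prototype mixture;
    # alpha controls how much of the original feature is kept.
    return alpha * support + (1 - alpha) * (w @ base_prototypes)
```

Pulling boundary-adjacent support samples toward the base-class priors they resemble most makes the subsequent nearest-neighbor comparison less sensitive to unlucky support draws.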